Ivanov Alexander Ivanovich, doctor of technical sciences, associate professor, senior researcher, Penza Scientific Research Electrotechnical Institute (440026, 9 Sovetskaya street, Penza, Russia), E-mail: firstname.lastname@example.org
Kupriyanov Evgeny Nikolaevich, postgraduate student, Penza State University (440026, 40 Krasnaya street, Penza, Russia), E-mail: email@example.com
Savinov Konstantin Nikolayevich, senior lecturer, department of automated control systems and satellite communications, Military Training Center, Penza State University (440026, 40 Krasnaya street, Penza, Russia), E-mail: firstname.lastname@example.org
Bannykh Andrey Grigoryevich, postgraduate student, Penza State University (440026, 40 Krasnaya street, Penza, Russia), E-mail: email@example.com
Bezjaev Alexander Victorovich, candidate of technical sciences, lead specialist, STC “Atlas” Penza branch (440026, 9 Sovetskaya street, Penza, Russia), E-mail: Bezyaev_Alex@mail.ru
Background. The aim of this work is to present the main theoretical provisions that allow the confidence probabilities of joint decisions of 7 neurons to be estimated, where the neurons reproduce the chi-square criterion, two Cramér-von Mises criteria, two variants of the Anderson-Darling criterion, the Shapiro-Wilk criterion, and the geometric mean criterion.
Materials and methods. Each of the statistical criteria under consideration can be assigned its own artificial neuron. The neurons are tuned so that errors of the first and second kind are equally probable when separating normally distributed data from uniformly distributed data.
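The per-criterion neuron can be illustrated with a minimal Monte Carlo sketch (an assumed implementation, not the authors' code): a Pearson chi-square statistic over 5 equiprobable bins of N(0, 1) is wrapped as a binary classifier whose firing threshold is tuned until errors of the first and second kind on samples of N = 21 are approximately equally probable. The bin count, the uniform(-1, 1) alternative, and the trial budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 21           # small-sample size used in the paper
TRIALS = 20000   # Monte Carlo trials per hypothesis (illustrative choice)

# Quintile edges of the standard normal distribution: 5 equiprobable bins.
EDGES = np.array([-0.8416, -0.2533, 0.2533, 0.8416])
EXPECTED = N / 5.0

def chi2_stat(x):
    """Pearson chi-square statistic of a standardized sample against
    the 5 equiprobable bins of N(0, 1)."""
    z = (x - x.mean()) / x.std(ddof=1)
    counts = np.bincount(np.searchsorted(EDGES, z), minlength=5)
    return float(np.sum((counts - EXPECTED) ** 2 / EXPECTED))

# Statistic under both hypotheses: normal data vs. uniform data.
s_norm = np.sort([chi2_stat(rng.standard_normal(N)) for _ in range(TRIALS)])
s_unif = np.sort([chi2_stat(rng.uniform(-1.0, 1.0, N)) for _ in range(TRIALS)])

# Tune the threshold so that errors of the first kind (normal rejected)
# and the second kind (uniform accepted) are approximately equal.
cands = np.unique(np.concatenate([s_norm, s_unif]))
err1 = 1.0 - np.searchsorted(s_norm, cands, side="right") / TRIALS
err2 = np.searchsorted(s_unif, cands, side="right") / TRIALS
i = int(np.argmin(np.abs(err1 - err2)))
threshold, eer = float(cands[i]), float((err1[i] + err2[i]) / 2)

def chi2_neuron(x):
    """Fires 1 ('normal') when the statistic is below the tuned threshold."""
    return int(chi2_stat(x) <= threshold)
```

With a small sample the two distributions of the statistic overlap heavily, so the equal-error probability of a single such neuron remains large, which motivates combining several criteria.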
Results. A pre-trained network of 7 artificial neurons gives an effective increase in the size of the test sample from 21 to 29 examples compared with the use of a single chi-square neuron. If we remain within the framework of linear forecasting, then increasing the number of artificial neurons to 70 should lead to an equivalent growth of the initial sample to 101 experiments, that is, an almost 5-fold decrease in the error probability, from 0.32 to 0.06, relative to a single chi-square neuron.
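The benefit of pooling several criteria can be bounded by an idealized calculation (an illustrative assumption, not the paper's model): if the 7 neurons erred independently, each with the single-neuron error probability 0.32, a simple majority vote would err with probability about 0.153. Statistical criteria evaluated on the same sample are strongly correlated, which is why the gain actually observed (an equivalent sample growth from 21 to 29) is more modest than this bound.

```python
from math import comb

def majority_error(p, n):
    """Error probability of an n-detector majority vote, assuming the
    detectors err independently with probability p each. Criteria computed
    on the same sample are correlated, so this is only an optimistic
    lower bound on the joint error."""
    need = n // 2 + 1                      # votes required to flip the decision
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(need, n + 1))

P_SINGLE = 0.32                            # chi-square neuron error from the paper
print(round(majority_error(P_SINGLE, 7), 3))   # → 0.153
```

The same formula shows why adding more (sufficiently dissimilar) criteria keeps lowering the joint error, consistent with the linear forecast of reaching roughly 0.06 at 70 neurons.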
Conclusions. A neural network trained to recognize small samples of normal data can be regarded as a kind of mathematical molecule with 128 output spectral lines. The gain from a neural network combining several statistical criteria arises from computations at the limit transition between the continuous probability spectrum and its analogue, the discrete line spectrum of the probability amplitudes of small samples.